aws-smithy-http-server - http/1 support #4373
base: feature/http-1.x
Conversation
uuid = { version = "1.1.2", features = ["v4", "fast-rng"], optional = true }

[dev-dependencies]
aws-smithy-legacy-http-server = { path = "../aws-smithy-legacy-http-server" }
I'm sure I'll find out but...why do we have this dependency?
This dependency, along with its benchmark, needs to be removed. It was initially added to include some AI-generated benchmarking, but the generated benchmark needs significant improvement. However, I'd prefer not to delay this PR further by waiting for a proper benchmark framework to be implemented.
| println!("════════════════════════════════════════════════════════════"); | ||
| println!(); | ||
| println!("📊 Summary"); | ||
| println!("────────────────────────────────────────────────────────────"); | ||
| println!(); | ||
|
|
||
| let speedup = new_results.requests_per_sec / legacy_results.requests_per_sec; | ||
| let latency_improvement = (legacy_results.avg_latency_micros - new_results.avg_latency_micros) | ||
| / legacy_results.avg_latency_micros * 100.0; | ||
|
|
||
| println!(" Legacy (hyper 0.14): {:.2} req/s", legacy_results.requests_per_sec); | ||
| println!(" New (hyper 1.x): {:.2} req/s", new_results.requests_per_sec); | ||
| println!(); | ||
| println!(" Throughput change: {:.2}% ({:.2}x)", (speedup - 1.0) * 100.0, speedup); | ||
| println!(" Latency improvement: {:.2}%", latency_improvement); | ||
| println!(); | ||
|
|
||
| if speedup > 1.01 { | ||
| println!(" ✅ New server is faster!"); | ||
| } else if speedup < 0.99 { | ||
| println!(" ⚠️ Legacy server was faster"); | ||
| } else { | ||
| println!(" ≈ Performance is similar"); | ||
| } | ||
| println!(); | ||
| println!("════════════════════════════════════════════════════════════"); | ||
|
|
Is this actually... a scientific comparison / load test or anything?
// The listener limits concurrent connections to 100.
// Once 100 connections are active, new connections will wait at the OS level
// until an existing connection completes.
let listener = TcpListener::bind("0.0.0.0:3000").await?.limit_connections(100);
huh...I wonder if we want an extension trait here or something more explicit.
It comes from ListenerExt imported at the top—happy to hear what you had in mind for making it more explicit.
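For reference, a minimal sketch of the extension-trait pattern in question. The names here (Listener, LimitedListener) are illustrative stand-ins, not the crate's actual definitions:

// Illustrative only: shows how a single `use ...::ListenerExt` import can add
// `limit_connections` to any listener via a blanket impl.
trait Listener {}

// Wrapper returned by the extension method; records the connection cap.
struct LimitedListener<L> {
    inner: L,
    max_connections: usize,
}

trait ListenerExt: Listener + Sized {
    fn limit_connections(self, max: usize) -> LimitedListener<Self> {
        LimitedListener { inner: self, max_connections: max }
    }
}

// Blanket impl: every Listener gets the method for free.
impl<L: Listener> ListenerExt for L {}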
// Limit concurrent requests at the application level
let app = ServiceBuilder::new()
    .layer(ConcurrencyLimitLayer::new(50))
We should NEVER have examples of ConcurrencyLimit that are not combined with LoadShed because it leads to memory / connection leaks otherwise.
You're absolutely right about this. I'll remove the example entirely—the intent was to show how to apply a Tower layer, but this topic really belongs in a dedicated production hardening guide, not as a standalone example suggesting there's a 'one size fits all' solution.
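For the record, the combination described above looks roughly like this with tower's built-in layers. This is a sketch, assuming tower's load-shed, limit, and util features are enabled; handler is a placeholder service:

use tower::{service_fn, ServiceBuilder};

fn main() {
    // Placeholder service for illustration; the real application service goes here.
    let handler = service_fn(|req: http::Request<String>| async move {
        Ok::<_, std::convert::Infallible>(http::Response::new(req.into_body()))
    });

    // LoadShed sits outside ConcurrencyLimit: when all 50 permits are taken,
    // new calls fail fast with an "overloaded" error instead of queuing
    // without bound, which is the leak described above.
    let _app = ServiceBuilder::new()
        .load_shed()
        .concurrency_limit(50)
        .service(handler);
}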
//! Example comparing handle_connection vs handle_connection_strategy performance.
//!
//! This example starts a simple HTTP server that responds with "Hello, World!"
//! and can be load tested with tools like `oha`.
//!
//! Usage:
//!   # Run with strategy pattern implementation (uses handle_connection_strategy)
//!   cargo run --example serve_comparison --release
//!
//!   # Then in another terminal, run load test:
//!   oha -z 30s -c 100 http://127.0.0.1:3000/
//!
//! To test the original branching implementation, you would need to modify
//! the serve module to export handle_connection and use it here.
this doesn't appear to be comparing anything?
You're right, the name is misleading—it's not actually comparing anything. This was just being used to test the server independently against workloads using oha/wrk. I'll remove it.
#![cfg_attr(
    not(feature = "request-id"),
    allow(unused_imports, dead_code, unreachable_code)
)]
I think you can cfg out the entire example? Or can you not, because it's an example and not a module?
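One option is Cargo's required-features on the [[example]] target, which skips building the example entirely when the feature is off. A Rust-only alternative is sketched below; the feature name is taken from the snippet above, and the body shown is a stub:

// Sketch: gate the real example behind the feature and keep a fallback `main`
// so the example target still compiles when the feature is disabled.
#[cfg(feature = "request-id")]
fn main() {
    // real example body goes here (imports for it should also be cfg-gated)
}

#[cfg(not(feature = "request-id"))]
fn main() {
    eprintln!("This example requires the `request-id` feature:");
    eprintln!("    cargo run --example <name> --features request-id");
}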
This PR migrates aws-smithy-http-server from http 0.2.x / hyper 0.14 to http 1.x / hyper 1.x.

Background

This is the second PR in the HTTP 1.x migration series:
- Added aws-smithy-legacy-http and aws-smithy-legacy-http-server for the legacy http 0.x / hyper 0.14 stack (previous PR)
- Migrates aws-smithy-http-server to http 1.x / hyper 1.x (this PR)
- Adds the http-1x flag and conditional dependency logic (upcoming PR)

Key Changes
1. Dependency Upgrades

- http: 0.2.9 → 1.x
- http-body: 0.4.5 → 1.0
- hyper: 0.14.26 → 1.x (with server, http1, http2 features)
- Added hyper-util 0.1 for server utilities (server-auto, server-graceful, etc.)
- Added http-body-util 0.1 for body utilities
- tower-http: 0.3 → 0.6
- lambda_http: 0.8.4 → 0.17 (for AWS Lambda support)

Updated aws-smithy-types to use the http-body-1-x feature instead of http-body-0-4-x.

2. New serve Module

Added a comprehensive serve module (inspired by axum::serve) that provides the following (a rough usage sketch follows the file list):
- A serve(listener, service) function
- A Listener trait supporting TCP, Unix sockets, and custom transports
- limit_connections() via ListenerExt
- .with_graceful_shutdown() for graceful shutdown
- .configure_hyper() for protocol and performance tuning

Files: src/serve/mod.rs (794 lines), src/serve/listener.rs (269 lines)
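Roughly, these pieces compose like this. This is a sketch based only on the list above; app and shutdown_signal() are placeholders, and exact signatures may differ from the final serve module API:

// Sketch only: `app` stands in for the generated Smithy service (a tower
// Service) and `shutdown_signal()` for a future that resolves when the
// process should stop.
let listener = TcpListener::bind("0.0.0.0:3000").await?
    .limit_connections(100);                    // ListenerExt
serve(listener, app)                            // serve(listener, service)
    .with_graceful_shutdown(shutdown_signal())  // graceful shutdown hook
    .await?;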
3. API Updates Throughout

Updated all modules to work with http 1.x APIs:
- http-body 1.0 traits (+176 lines)
- hyper 1.x updates (+299 lines)
- hyper 1.x error types

4. Comprehensive Testing
Migration Impact

Breaking Change: This updates the public API to use http 1.x types (http::Request, http::Response, etc.).

Users will need to:
- Move to http 1.x types
- Use the new serve() API or update to hyper 1.x connection APIs

Benefits

- The new serve module provides simpler server setup
- hyper 1.x performance improvements

Testing
Next Steps
The next PR will update the codegen to conditionally generate code that uses either:
- aws-smithy-legacy-http-server (for hyper 0.14 - default)
- aws-smithy-http-server (for hyper 1.x - when the http-1x flag is enabled)